Lie algebra action

Definition

The action of a Lie algebra $\mathfrak{g}$ on a manifold $M$ is a Lie algebra homomorphism

$$ \mathcal{A}: \mathfrak{g} \to \mathfrak{X}(M) $$

such that the map

$$ \mathfrak{g}\times M \to TM $$

defined by

$$ (v,p)\mapsto \mathcal{A}(v)_p \in T_p M $$

is smooth.

$\blacksquare$

Example

A group action of a Lie group $G$ on $M$ induces a Lie algebra action via fundamental vector fields. But I think that the converse is not true: I guess there must be problems of "completeness". Maybe this is related to the notion of pseudogroup or to local groups of transformations (are they the same?).
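As a quick illustration of how fundamental vector fields arise, here is a minimal sympy sketch (my own check, not from any reference): it computes the fundamental vector field of the rotation action on $\mathbb{R}^2$ that appears later in these notes, by differentiating the flow with respect to the parameter at the identity.

```python
# Fundamental vector field of the rotation action
#   phi_theta(x, y) = (cos(theta) x + sin(theta) y, -sin(theta) x + cos(theta) y),
# obtained as the derivative of the flow at theta = 0.
import sympy as sp

x, y, theta = sp.symbols('x y theta', real=True)

phi = sp.Matrix([sp.cos(theta)*x + sp.sin(theta)*y,
                 -sp.sin(theta)*x + sp.cos(theta)*y])

X_V = phi.diff(theta).subs(theta, 0)
print(X_V.T)   # Matrix([[y, -x]]): the fundamental vector field (y, -x)
```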

Infinitesimal action

A Lie algebra action is also called an infinitesimal action. Why?

Let's consider the Lie algebra action induced by a Lie group action of $G$ on a manifold $M$. Given $V\in \mathfrak{g}$, we can identify it, loosely speaking, with

$$ e+\delta V \in G $$

if $\delta$ is small. In the same way, if $X_V$ is the fundamental vector field on $M$ then, for $p\in M$, we can imagine the point $q$ that results from adding to $p$ a bit of $X_V(p)$, that is,

$$ q=p+\delta X_V(p)=p(e+\delta V) $$

We can think of all of this as the action on $M$ of elements of $G$ that live very near the identity element, which, of course, when applied to $p\in M$ yield points of $M$ that lie near $p$.
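For a matrix group this heuristic can be checked directly: $\exp(\delta V)$ agrees with $e+\delta V$ up to terms of order $\delta^2$. A minimal sympy sketch of that check (my own, taking $V$ to be the rotation generator):

```python
# Check that exp(delta*V) = I + delta*V + O(delta^2) for a matrix V.
# Here V is the generator of plane rotations; any other matrix works the same way.
import sympy as sp

delta = sp.symbols('delta', real=True)
V = sp.Matrix([[0, 1], [-1, 0]])

exp_dV = (delta * V).exp()   # symbolic matrix exponential
first_order = exp_dV.applyfunc(
    lambda entry: sp.simplify(sp.series(entry, delta, 0, 2).removeO()))
print(first_order)           # Matrix([[1, delta], [-delta, 1]]), i.e. I + delta*V
```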

Recovering the group action

Surprisingly, the infinitesimal action lets us recover the group action. Intuitively, it goes like this. Define

$$ T_{\delta}(p):=p(e+\delta V) $$

We can apply the idea of "adding a bit of $X_V$" successively, with smaller and smaller $\delta$s. Taking $\delta=\frac{1}{N}$ and composing $N$ times,

$$ \underbrace{T_{\frac{1}{N}}(T_{\frac{1}{N}}(\cdots T_{\frac{1}{N}}(p)))}_{N\text{ times}}=p\cdot \left(e+\tfrac{1}{N} V\right)^N \approx p\cdot \exp(V) $$

with the approximation becoming exact as $N\to\infty$.

This is strongly related to the exponential map (see exponential map#Motivation).
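A quick numerical check of this limit (my own sketch, identifying group elements with matrices and taking the rotation generator as $V$):

```python
# Iterating "add a little bit of V" N times approaches the group element exp(V):
# (I + V/N)^N converges to the matrix exponential expm(V) as N grows.
import numpy as np
from scipy.linalg import expm

V = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # generator of rotations in the plane
I = np.eye(2)

for N in (10, 100, 1000, 10000):
    approx = np.linalg.matrix_power(I + V / N, N)   # (e + V/N)^N
    error = np.abs(approx - expm(V)).max()
    print(f"N = {N:6d}   max error = {error:.2e}")
```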

I think that, in the particular case of a matrix Lie group/matrix Lie algebra, this can be seen in the relation between the determinant and the trace, $\det(e^{A})=e^{\operatorname{tr}(A)}$.
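For instance, a quick numerical check of that relation (my own sketch):

```python
# Check det(exp(A)) = exp(tr(A)) for a random matrix A: the determinant/trace
# relation mentioned above.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

lhs = np.linalg.det(expm(A))
rhs = np.exp(np.trace(A))
print(lhs, rhs)   # the two numbers should agree up to rounding error
```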

Example

Let's see an example

When you have a relatively complicated active coordinate transformation, for example the rotation

$$ \begin{align*} x & \mapsto \cos(\theta)\, x+\sin(\theta)\, y\\ y & \mapsto -\sin(\theta)\, x+\cos(\theta)\, y \end{align*} $$

we can take $\theta$ to be a small $\delta$ and keep only terms of first order in $\delta$.

That is, we replace $\cos(\delta)$ with $1$ and $\sin(\delta)$ with $\delta$ (first-order approximation), and we obtain

$$ \begin{align*} x & \mapsto x+\delta y\\ y & \mapsto -\delta x+y \end{align*} $$

This can be rewritten as

$$ (x,y) \longmapsto (x,y)+\delta \cdot(y,-x) $$

This is justified because

$$ d\tau_e(\partial_{\delta})=X_{\partial_{\delta}}=(y,-x) $$
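Here is a symbolic check of this first-order approximation (my own sketch with sympy): expand the rotated coordinates in $\delta$ and keep only the linear terms.

```python
# Expand the rotation by a small angle delta to first order: the result is
# (x, y) + delta*(y, -x), matching the infinitesimal generator (y, -x) above.
import sympy as sp

x, y, delta = sp.symbols('x y delta', real=True)

rotated = sp.Matrix([sp.cos(delta)*x + sp.sin(delta)*y,
                     -sp.sin(delta)*x + sp.cos(delta)*y])

first_order = rotated.applyfunc(
    lambda entry: sp.series(entry, delta, 0, 2).removeO())
print(first_order.T)   # Matrix([[delta*y + x, -delta*x + y]])
```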

To understand this a bit more, let us stay in $\mathbb{R}^2$ but with a general transformation $F_{\lambda}: \mathbb{R}^2 \to \mathbb{R}^2$, where $\lambda \in \mathbb{R}$ is a parameter. It can be thought of as an action of the group $(\mathbb{R},+)$ on $\mathbb{R}^2$. So, for every $\lambda$:

$$ (x,y)\longmapsto F_{\lambda}(x,y)=(F^1_{\lambda}(x,y),F^2_{\lambda}(x,y)) $$

with $F_0=\operatorname{id}$.

Since $\mathbb{R}^2$ is an affine space, we can take the vector $(F^1_{\lambda}(x,y),F^2_{\lambda}(x,y))-(x,y)$. As a first-order approximation we assume:

$$ \left(\frac{F^1_{\lambda}(x,y)-x}{\lambda},\frac{F^2_{\lambda}(x,y)-y}{\lambda}\right)\cong \left(\frac{d F^1_{\lambda}(x,y)}{d\lambda}|_{\lambda=0},\frac{d F^2_{\lambda}(x,y)}{d\lambda}|_{\lambda=0}\right) $$

so the transformation, for a small $\lambda$, could be considered as

$$ (x,y)\longmapsto (x,y)+ \lambda\cdot \left(\frac{d F^1_{\lambda}(x,y)}{d\lambda}|_{\lambda=0},\frac{d F^2_{\lambda}(x,y)}{d\lambda}|_{\lambda=0}\right) $$

The approximation taken above can be interpreted as a Taylor expansion, since

$$ \begin{align*} x & \mapsto F^1_{\lambda}(x,y)=F^1_0+\frac{d F^1_{\lambda}}{d\lambda}\Big|_{\lambda=0}\cdot \lambda+\cdots\cong x+\frac{d F^1_{\lambda}}{d\lambda}\Big|_{\lambda=0}\cdot \lambda\\ y & \mapsto F^2_{\lambda}(x,y)=F^2_0+\frac{d F^2_{\lambda}}{d\lambda}\Big|_{\lambda=0}\cdot \lambda+\cdots\cong y+\frac{d F^2_{\lambda}}{d\lambda}\Big|_{\lambda=0}\cdot \lambda \end{align*} $$

as most physics books do.
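To see this recipe at work on a different one-parameter group (my own example, not from the notes above): take the hyperbolic scaling $F_\lambda(x,y)=(e^{\lambda}x,\ e^{-\lambda}y)$, an action of $(\mathbb{R},+)$ on $\mathbb{R}^2$, and compute the derivative at $\lambda=0$.

```python
# Infinitesimal generator of F_lambda(x, y) = (exp(lambda)*x, exp(-lambda)*y),
# obtained by differentiating with respect to the parameter at lambda = 0.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

F = sp.Matrix([sp.exp(lam) * x, sp.exp(-lam) * y])

generator = F.diff(lam).subs(lam, 0)
print(generator.T)   # Matrix([[x, -y]]): the vector field x d/dx - y d/dy
```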

Several considerations:

If the vector field $X$ is linear, say

$$ X=\sum_{i,j} a_{ij}\, x_i\, \partial_{x_j}, $$

we can identify $X$ with the matrix $(a_{ij})$, also denoted $X$, and

$$ \exp(\delta X)\cdot m=e^{\delta X}\cdot m $$

(the LHS is the general exponential map and the RHS is the matrix exponential map).

For $\dim(M)=1$ we recover the usual exponential map in $\mathbb{R}$.
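A numerical sketch of that identification (my own check; to sidestep index conventions I write the linear vector field in column-vector form as $X(x)=A\,x$): integrate the flow of the linear field and compare with the matrix exponential.

```python
# The time-t flow of the linear vector field x' = A x is x(t) = expm(t*A) x(0):
# the exponential of the vector field coincides with the matrix exponential.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # a linear vector field, written as X(x) = A x
m = np.array([2.0, 1.0])             # starting point m
t = 0.7                              # the "delta" of the notes

sol = solve_ivp(lambda s, p: A @ p, (0.0, t), m, rtol=1e-10, atol=1e-12)
flow_point = sol.y[:, -1]            # exp(t X) . m, by integrating the flow
matrix_point = expm(t * A) @ m       # e^{t A} . m, by the matrix exponential
print(flow_point, matrix_point)      # the two points should agree
```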

$$ X(F)=0 $$

________________________________________

Author of the notes: Antonio J. Pan-Collantes

antonio.pan@uca.es

